6,929 research outputs found

    An elastic net orthogonal forward regression algorithm

    No full text
    In this paper we propose an efficient two-level model identification method for a large class of linear-in-the-parameters models from observational data. A new elastic net orthogonal forward regression (ENOFR) algorithm is employed at the lower level to carry out simultaneous model selection and elastic net parameter estimation. The two regularization parameters in the elastic net are optimized using a particle swarm optimization (PSO) algorithm at the upper level by minimizing the leave-one-out (LOO) mean square error (LOOMSE). Illustrative examples are included to demonstrate the effectiveness of the new approaches.
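
    A minimal sketch of the two-level idea, assuming a generic elastic-net solver rather than the authors' ENOFR algorithm: the two regularization parameters (here scikit-learn's alpha and l1_ratio) are tuned by minimizing the leave-one-out MSE, with a plain grid search standing in for the PSO upper level described above.

```python
# Sketch only: generic elastic net + LOO-MSE tuning, not the paper's ENOFR/PSO method.
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))                     # candidate regressors (linear-in-the-parameters model)
w_true = np.array([1.5, 0.0, -2.0, 0.0, 0.7, 0, 0, 0, 0, 0])
y = X @ w_true + 0.1 * rng.normal(size=50)        # synthetic observational data

best = (np.inf, None)
for alpha in [0.01, 0.1, 1.0]:                    # overall regularization strength
    for l1_ratio in [0.1, 0.5, 0.9]:              # balance between the L1 and L2 penalties
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, max_iter=10000)
        loo_mse = -cross_val_score(model, X, y, cv=LeaveOneOut(),
                                   scoring="neg_mean_squared_error").mean()
        if loo_mse < best[0]:
            best = (loo_mse, (alpha, l1_ratio))

print("best LOO MSE %.4f at (alpha, l1_ratio) = %s" % best)
```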

    Modeling of complex-valued Wiener systems using B-spline neural network

    No full text
    In this brief, a new complex-valued B-spline neural network is introduced in order to model the complex-valued Wiener system using observational input/output data. The complex-valued nonlinear static function in the Wiener system is represented using the tensor product of two univariate B-spline neural networks defined over the real and imaginary parts of the system input. Following a simple least squares parameter initialization scheme, the Gauss–Newton algorithm is applied for the parameter estimation, incorporating the De Boor algorithm for both the B-spline curve and the first-order derivative recursions. Numerical examples, including a nonlinear high-power amplifier model in communication systems, are used to demonstrate the efficacy of the proposed approaches.
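
    As an illustration of the basis construction (function names and knot choices below are assumptions, not taken from the brief): the Cox–de Boor recursion evaluates a univariate B-spline basis, and the tensor product of two such bases, evaluated at the real and imaginary parts of a complex input, gives the regressor vector whose weights model the static nonlinearity.

```python
# Illustrative sketch of a tensor-product B-spline regressor for a complex input.
import numpy as np

def bspline_basis(x, knots, degree):
    """Return all B-spline basis values B_{i,degree}(x) for a scalar x (Cox-de Boor recursion)."""
    n = len(knots) - degree - 1                    # number of basis functions
    B = np.zeros(len(knots) - 1)
    for i in range(len(knots) - 1):                # degree-0 (piecewise-constant) bases
        B[i] = 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    for k in range(1, degree + 1):                 # raise the degree step by step
        for i in range(len(knots) - 1 - k):
            left = 0.0 if knots[i + k] == knots[i] else \
                (x - knots[i]) / (knots[i + k] - knots[i]) * B[i]
            right = 0.0 if knots[i + k + 1] == knots[i + 1] else \
                (knots[i + k + 1] - x) / (knots[i + k + 1] - knots[i + 1]) * B[i + 1]
            B[i] = left + right
    return B[:n]

knots = np.array([0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1], dtype=float)
z = 0.3 + 0.6j                                     # complex system input
phi_re = bspline_basis(z.real, knots, 3)           # basis over the real part
phi_im = bspline_basis(z.imag, knots, 3)           # basis over the imaginary part
phi = np.outer(phi_re, phi_im).ravel()             # tensor-product regressor vector
print(phi.shape)                                   # complex weights of this length model the static nonlinearity
```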

    Elastic net prefiltering for two class classification

    No full text
    A two-stage linear-in-the-parameter model construction algorithm is proposed aimed at noisy two-class classification problems. The purpose of the first stage is to produce a prefiltered signal that is used as the desired output for the second stage, which constructs a sparse linear-in-the-parameter classifier. The prefiltering stage is a two-level process aimed at maximizing a model's generalization capability, in which a new elastic-net model identification algorithm using singular value decomposition is employed at the lower level, and then two regularization parameters are optimized using a particle-swarm-optimization algorithm at the upper level by minimizing the leave-one-out (LOO) misclassification rate. It is shown that the LOO misclassification rate based on the resultant prefiltered signal can be analytically computed without splitting the data set, and the associated computational cost is minimal due to orthogonality. The second stage of sparse classifier construction is based on orthogonal forward regression with the D-optimality algorithm. Extensive simulations on noisy data sets illustrate the competitiveness of this approach for noisy classification problems.
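
    The analytic LOO computation rests on the standard leave-one-out residual identity for linear-in-the-parameters models, e_i^(-i) = e_i / (1 - h_ii). A minimal sketch (generic least squares with a dense hat matrix, not the paper's SVD-based elastic-net identification, where orthogonality makes the diagonal cheap):

```python
# Sketch: analytic leave-one-out residuals and LOO misclassification rate, no data splitting.
import numpy as np

rng = np.random.default_rng(1)
P = rng.normal(size=(100, 5))                                        # regressor (design) matrix
y = np.sign(P @ rng.normal(size=5) + 0.3 * rng.normal(size=100))     # noisy +/-1 class labels

theta, *_ = np.linalg.lstsq(P, y, rcond=None)                        # least-squares parameter estimate
e = y - P @ theta                                                     # ordinary residuals
h_diag = np.einsum("ij,ij->i", P @ np.linalg.pinv(P.T @ P), P)        # diagonal of the hat matrix
e_loo = e / (1.0 - h_diag)                                            # analytic leave-one-out residuals

# LOO prediction for sample i is y_i - e_loo_i; compare its sign with the true label.
loo_error_rate = np.mean(np.sign(y - e_loo) != y)
print(loo_error_rate)
```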

    Modelling and inverting complex-valued Wiener systems

    No full text
    We develop a complex-valued (CV) B-spline neural network approach for efficient identification and inversion of CV Wiener systems. The CV nonlinear static function in the Wiener system is represented using the tensor product of two univariate B-spline neural networks. With the aid of a least squares parameter initialisation, the Gauss-Newton algorithm effectively estimates the model parameters, which include the CV linear dynamic model coefficients and the B-spline neural network weights. The identification algorithm naturally incorporates the efficient De Boor algorithm with both the B-spline curve and first-order derivative recursions. An accurate inverse of the CV Wiener system is then obtained: the inverse of the CV nonlinear static function is calculated efficiently using the Gauss-Newton algorithm based on the estimated B-spline neural network model, again with the aid of the De Boor recursions. The effectiveness of our approach for identification and inversion of CV Wiener systems is demonstrated in the application of digital predistorter design for high power amplifiers with memory.
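
    A minimal real-valued sketch of the inversion step, assuming the estimated static nonlinearity and its first derivative can be evaluated (in the paper these come from the fitted CV B-spline model via the De Boor recursions); the example nonlinearity below is illustrative, not taken from the paper:

```python
# Sketch: invert a static nonlinearity f by Newton iteration on f(x) = d.
import math

def invert_static(f, fprime, d, x0=0.0, iters=20, tol=1e-10):
    x = x0
    for _ in range(iters):
        r = f(x) - d                  # residual of f(x) = d
        if abs(r) < tol:
            break
        g = fprime(x)
        if abs(g) < 1e-12:            # guard against a flat derivative
            break
        x -= r / g                    # Newton update
    return x

f = lambda x: math.tanh(2.0 * x)                  # saturating nonlinearity (illustrative)
fprime = lambda x: 2.0 / math.cosh(2.0 * x) ** 2  # its derivative
print(invert_static(f, fprime, d=0.5))            # ~0.2747, since tanh(2 * 0.2747) ~ 0.5
```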

    Efficient polarization entanglement purification based on parametric down-conversion sources with cross-Kerr nonlinearity

    Full text link
    We present a scheme for entanglement purification based on two parametric down-conversion (PDC) sources with cross-Kerr nonlinearities. It comprises two processes. The first is a primary entanglement purification protocol for PDC sources that uses nondestructive quantum nondemolition (QND) detectors to transfer the spatial entanglement of photon pairs to their polarization. Here the QND detectors play the role of controlled-NOT (CNOT) gates; they can also distinguish the photon number in the spatial modes, which allows the second process to further purify the entanglement of the photon pairs that are kept. In the second purification process, new QND detectors are designed to play the role of CNOT gates. This protocol has the advantage of high yield and requires neither CNOT gates based on linear optical elements nor sophisticated single-photon detectors, which makes it more convenient for practical applications. Comment: 8 pages, 7 figures.
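
    The QND photon-number detection underpinning this protocol follows from the textbook cross-Kerr interaction (a standard relation, not a result quoted from the paper): the signal photon number imprints a phase on a probe coherent state, which a homodyne measurement reads out without absorbing the signal photons.

```latex
% Cross-Kerr QND detection: signal photon number n shifts the phase of the probe.
\[
  H_{\mathrm{QND}} = \hbar \chi \, \hat{a}^{\dagger}\hat{a}\, \hat{b}^{\dagger}\hat{b},
  \qquad
  |n\rangle_{a}\,|\alpha\rangle_{b}
  \;\longrightarrow\;
  |n\rangle_{a}\,|\alpha e^{\,i n \theta}\rangle_{b},
  \quad \theta = \chi t .
\]
```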

    Magnetothermoelectric DC conductivities from holography models with hyperscaling factor in Lifshitz spacetime

    Full text link
    We investigate an Einstein-Maxwell-Dilaton-Axion holographic model and obtain two branches of a charged black hole solution with a dynamic exponent and a hyperscaling violation factor in the presence of a magnetic field. The magnetothermoelectric DC conductivities are then calculated in terms of horizon data by means of the holographic principle. We find that a linear-in-temperature resistivity and a quadratic-in-temperature inverse Hall angle can be achieved in our model. The well-known anomalous temperature scaling of the Nernst signal and the Seebeck coefficient of cuprate strange metals is also discussed. Comment: 1+23 pages, 4 figures, references added.
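
    For reference, the anomalous strange-metal transport scalings alluded to here are usually summarised as follows (standard cuprate phenomenology, not figures from the paper):

```latex
% Linear-in-T resistivity together with a quadratic-in-T inverse (cotangent of the) Hall angle.
\[
  \rho_{xx} \propto T,
  \qquad
  \cot\theta_{H} = \frac{\sigma_{xx}}{\sigma_{xy}} \propto T^{2}.
\]
```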